Detecting and defending against cyber attacks in a smart home Internet of Things ecosystem
The proliferation of Internet of Things (IoT) devices is demonstrated by their prominence
in our daily lives. Although such devices simplify and automate everyday tasks,
they also introduce serious security flaws. Current security measures are insufficient,
making IoT one of the weakest links for breaking into an otherwise secure infrastructure,
which can have serious consequences. Consequently, this thesis is motivated by the
need to develop and further enhance novel mechanisms tailored towards strengthening
the overall security infrastructures of IoT ecosystems.
To estimate the degree to which a hub can improve the overall security of the ecosystem,
this thesis presents a design and prototype implementation of a novel secure
IoT hub, consisting of various built-in security mechanisms that satisfy key security
properties (e.g. authentication, confidentiality, access control) applicable to a range of
devices. The effectiveness of the hub was evaluated within a smart home IoT network
upon which popular cyber attacks were deployed.
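The hub's built-in mechanisms are not detailed in the abstract; a minimal sketch of how such a hub might combine two of the named properties, message authentication and per-device access control, is shown below. The device names, keys, and command lists are illustrative assumptions, not the thesis's actual design.

```python
import hashlib
import hmac

# Hypothetical per-device shared keys and access-control list;
# all names and policies here are illustrative, not from the thesis.
DEVICE_KEYS = {"bulb-01": b"secret-bulb", "lock-01": b"secret-lock"}
ACL = {"bulb-01": {"set_brightness"}, "lock-01": {"lock", "unlock"}}

def authenticate(device_id: str, payload: bytes, tag: str) -> bool:
    """Verify an HMAC-SHA256 tag computed with the device's shared key."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def authorise(device_id: str, command: str) -> bool:
    """Allow only commands listed in the device's ACL entry."""
    return command in ACL.get(device_id, set())

def handle(device_id: str, command: str, payload: bytes, tag: str) -> str:
    """Hub entry point: authenticate first, then apply access control."""
    if not authenticate(device_id, payload, tag):
        return "rejected: authentication failed"
    if not authorise(device_id, command):
        return "rejected: not permitted"
    return "accepted"
```

A forged tag or an out-of-policy command (e.g. a bulb issuing `unlock`) is rejected before it reaches any device.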
To further enhance the security of the IoT environment, initial experiments towards
the development of a three-layered Intrusion Detection System (IDS) are presented. The
IDS aims to: 1) classify IoT devices, 2) identify malicious or benign network packets,
and 3) identify the type of attack which has occurred. To support the classification
experiments, real network data was collected from a smart home testbed, where a range
of cyber attacks from four main attack types were targeted towards the devices.
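The three layers described above form a cascade: only traffic flagged as malicious reaches the attack-type stage. The sketch below illustrates the control flow; the feature names, thresholds, and labels are assumptions for illustration, not the thesis's trained models.

```python
# Minimal sketch of a three-layer IDS cascade. Feature names, thresholds,
# and class labels are illustrative assumptions, not values from the thesis.

def layer1_device(p: dict) -> str:
    """Layer 1: classify which IoT device produced the traffic."""
    return "camera" if p["avg_pkt_size"] > 512 else "plug"

def layer2_malicious(p: dict) -> bool:
    """Layer 2: decide whether the packet is malicious or benign."""
    return p["pkt_rate"] > 100

def layer3_attack(p: dict) -> str:
    """Layer 3: for malicious traffic only, identify the attack type."""
    if p["distinct_ports"] > 50:
        return "scanning"
    return "DoS" if p["pkt_rate"] > 1000 else "other"

def ids_pipeline(p: dict):
    """Run the full cascade on one packet's feature dictionary."""
    device = layer1_device(p)
    if not layer2_malicious(p):
        return device, "benign", None
    return device, "malicious", layer3_attack(p)
```

In practice each layer would be a trained classifier rather than a fixed threshold; the cascade structure is the point of the sketch.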
Lastly, the robustness of the IDS was further evaluated against Adversarial Machine
Learning (AML) attacks. Such attacks may target models by generating adversarial
samples which aim to exploit the weaknesses of the pre-trained model, consequently
bypassing the detector. This thesis presents a first approach towards automatically
generating adversarial malicious DoS IoT network packets. The analysis further explores how
adversarial training can enhance the robustness of the IDS.
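Adversarial training, as referenced above, augments the training set with correctly labelled adversarial samples and refits the model. A self-contained numpy sketch: a nearest-centroid classifier stands in for the IDS, and the perturbation (lowering a packet-rate feature towards benign values) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-feature packet data: column 0 = packet rate, column 1 = size.
benign = rng.normal([10, 100], 2, size=(100, 2))
malicious = rng.normal([200, 40], 2, size=(100, 2))

def fit_centroids(X_benign, X_malicious):
    return X_benign.mean(axis=0), X_malicious.mean(axis=0)

def predict(x, c_benign, c_malicious):
    """Nearest-centroid stand-in for the thesis's IDS classifiers."""
    if np.linalg.norm(x - c_malicious) < np.linalg.norm(x - c_benign):
        return "malicious"
    return "benign"

# Adversarial samples: malicious packets with the rate feature lowered
# towards benign values (an illustrative perturbation, not the thesis's).
adv = malicious.copy()
adv[:, 0] = 60

c_b, c_m = fit_centroids(benign, malicious)
evaded = predict(adv[0], c_b, c_m) == "benign"   # evades the clean model

# Adversarial training: add the correctly-labelled adversarial samples
# to the malicious class and refit.
c_b2, c_m2 = fit_centroids(benign, np.vstack([malicious, adv]))
caught = predict(adv[0], c_b2, c_m2) == "malicious"
```

After retraining, the malicious centroid shifts towards the adversarial region, so the same perturbed packet is detected.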
Secure data sharing and analysis in cloud-based energy management systems
Analysing data acquired from one or more buildings (through specialist sensors, energy generation capability such as PV panels, or smart meters) via a cloud-based Local Energy Management System (LEMS) is increasingly gaining popularity. In a LEMS, various smart devices within a building are monitored and/or controlled to either investigate energy usage trends within a building, or to investigate mechanisms to reduce total energy demand. However, whenever externally monitored/controlled smart devices are connected, security and privacy concerns arise. We describe the architecture and components of a LEMS and provide a survey of security and privacy concerns associated with data acquisition and control within a LEMS. Our scenarios specifically focus on the integration of Electric Vehicles (EVs) and Energy Storage Units (ESUs) at the building premises, to identify how EVs/ESUs can be used to store energy and reduce the electricity costs of the building. We review security strategies and identify potential security attacks that could be carried out on such a system, while exploring vulnerable points in the system. Additionally, we systematically categorize each vulnerability and examine potential attacks exploiting that vulnerability in a LEMS. Finally, we evaluate current countermeasures used against these attacks and suggest possible mitigation strategies.
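As an illustration of how an ESU can reduce a building's electricity cost, consider a simple price-driven schedule: buy energy to charge in the cheapest hours and discharge to offset demand in the most expensive ones. The tariff values, 1 kWh/hour rates, and loss-free model below are assumptions for the sketch, not from the paper.

```python
def baseline_cost(prices, demand):
    """Cost of meeting demand (kWh) directly from the grid at each hour's tariff."""
    return sum(p * d for p, d in zip(prices, demand))

def cost_with_esu(prices, demand, capacity_kwh):
    """Greedy sketch: charge 1 kWh in each of the `capacity_kwh` cheapest
    hours, then offset 1 kWh of demand in each of the most expensive hours.
    Charge/discharge losses and simultaneous charge/discharge conflicts
    are ignored for simplicity."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours = hours[:capacity_kwh]
    discharge_hours = [h for h in reversed(hours) if demand[h] >= 1][:capacity_kwh]
    cost = baseline_cost(prices, demand)
    cost += sum(prices[h] for h in charge_hours)      # energy bought to charge
    cost -= sum(prices[h] for h in discharge_hours)   # grid purchases avoided
    return cost
```

With tariffs `[10, 10, 30, 30]` p/kWh, demand `[1, 1, 2, 2]` kWh, and a 2 kWh unit, the schedule shifts two expensive kWh to cheap hours.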
Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems
The proliferation and application of machine learning based Intrusion
Detection Systems (IDS) have allowed for more flexibility and efficiency in the
automated detection of cyber attacks in Industrial Control Systems (ICS).
However, the introduction of such IDSs has also created an additional attack
vector; the learning models may also be subject to cyber attacks, otherwise
referred to as Adversarial Machine Learning (AML). Such attacks may have severe
consequences in ICS, as adversaries could potentially bypass the IDS.
This could lead to delayed attack detection which may result in infrastructure
damages, financial loss, and even loss of life. This paper explores how
adversarial learning can be used to target supervised models by generating
adversarial samples using the Jacobian-based Saliency Map attack and exploring
classification behaviours. The analysis also includes the exploration of how
such samples can support the robustness of supervised models using adversarial
training. An authentic power system dataset was used to support the experiments
presented herein. Overall, the classification performance of two widely used
classifiers, Random Forest and J48, decreased by 16 and 20 percentage points
when adversarial samples were present. Their performances improved following
adversarial training, demonstrating their robustness towards such attacks.
Comment: 9 pages, 7 figures, 7 tables, 46 references. Submitted to a special
issue of the Journal of Information Security and Applications, Machine Learning
Techniques for Cyber Security: Challenges and Future Trends, Elsevier.
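The Jacobian-based Saliency Map Attack used above perturbs the input features to which the model's output is most sensitive. Since the paper's Random Forest and J48 classifiers are not differentiable, the sketch below illustrates the idea on a hypothetical logistic model whose gradient is available; the weights and step size are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logistic "attack detector": P(malicious | x) = sigmoid(w.x + b).
# The weights are illustrative; a differentiable model stands in here for the
# paper's Random Forest/J48 so that the Jacobian (gradient) exists.
w = np.array([0.9, 0.5, -0.3])
b = -1.0

def jsma(x, steps=10, eps=0.5):
    """Greedily decrease P(malicious) by perturbing the single most
    salient feature (largest gradient magnitude) at each step."""
    x = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(w @ x + b)
        if p < 0.5:                     # classified benign: evasion succeeded
            break
        grad = p * (1 - p) * w          # dP/dx for the logistic model
        i = np.argmax(np.abs(grad))     # most salient feature
        x[i] -= eps * np.sign(grad[i])  # push it to reduce P(malicious)
    return x
```

Starting from a confidently malicious input, a handful of single-feature steps are enough to flip the prediction.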
Enhancing Enterprise Network Security: Comparing Machine-Level and Process-Level Analysis for Dynamic Malware Detection
Analysing malware is important to understand how malicious software works and
to develop appropriate detection and prevention methods. Dynamic analysis can
overcome evasion techniques commonly used to bypass static analysis and provide
insights into malware runtime activities. Much research on dynamic analysis
focused on investigating machine-level information (e.g., CPU, memory, network
usage) to identify whether a machine is running malicious activities. A
malicious machine does not necessarily mean all running processes on the
machine are also malicious. If we can isolate the malicious process instead of
isolating the whole machine, we could kill the malicious process, and the
machine can keep doing its job. Another challenge dynamic malware detection
research faces is that the samples are executed in one machine without any
background applications running. This is unrealistic, as a computer typically runs
many benign (background) applications when a malware incident happens. Our
experiment with machine-level data shows that the existence of background
applications decreases previous state-of-the-art accuracy by about 20.12% on
average. We also propose a process-level Recurrent Neural Network (RNN)-based
detection model. Our proposed model performs better than the machine-level
detection model, with a 0.049 increase in detection rate and a false-positive rate
below 0.1.
Comment: Dataset link: https://github.com/bazz-066/cerberus-trac
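A process-level detector consumes a sequence of per-process feature vectors (e.g. CPU and memory use per time step) rather than whole-machine aggregates. Below is a minimal numpy sketch of a vanilla RNN forward pass over one such sequence; the dimensions and random weights are illustrative stand-ins, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative shapes: 2 features per time step (e.g. CPU %, memory use),
# 4 hidden units, one output (malicious probability). Weights are random
# placeholders; a real detector would learn them from labelled traces.
Wx = rng.normal(size=(4, 2)) * 0.5   # input -> hidden
Wh = rng.normal(size=(4, 4)) * 0.5   # hidden -> hidden (recurrence)
Wo = rng.normal(size=(1, 4)) * 0.5   # hidden -> output

def rnn_score(sequence):
    """Run a vanilla RNN over one process's feature sequence and
    return P(malicious) from the final hidden state."""
    h = np.zeros(4)
    for x in sequence:
        h = np.tanh(Wx @ x + Wh @ h)      # recurrent state update
    z = (Wo @ h).item()
    return 1.0 / (1.0 + np.exp(-z))       # sigmoid output
```

Scoring each process independently is what allows killing a single malicious process instead of isolating the whole machine.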
Hardening machine learning Denial of Service (DoS) defences against adversarial attacks in IoT smart home networks
Machine learning based Intrusion Detection Systems (IDS) allow flexible and efficient automated detection of cyberattacks in Internet of Things (IoT) networks. However, this has also created an additional attack vector; the machine learning models which support the IDS's decisions may also be subject to cyberattacks known as Adversarial Machine Learning (AML). In the context of IoT, AML can be used to manipulate data and network traffic that traverse through such devices. These perturbations increase the confusion in the decision boundaries of the machine learning classifier, where malicious network packets are often misclassified as being benign. Consequently, such packets bypass machine learning based detectors, which increases the potential of significantly delaying attack detection and further consequences such as personal information leakage, damaged hardware, and financial loss. Given the impact that these attacks may have, this paper proposes a rule-based approach towards generating AML attack samples and explores how they can be used to target a range of supervised machine learning classifiers used for detecting Denial of Service (DoS) attacks in an IoT smart home network. The analysis explores which DoS packet features to perturb and how such adversarial samples can support increasing the robustness of supervised models using adversarial training. The results demonstrated that the performance of all the top performing classifiers was affected, decreasing by a maximum of 47.2 percentage points when adversarial samples were present. Their performances improved following adversarial training, demonstrating their robustness towards such attacks.
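The paper's actual perturbation rules are not given in the abstract; the sketch below only illustrates the general shape of a rule-based approach, applying per-feature rules to DoS-indicative fields while leaving the rest of the packet intact. The feature names and the rules themselves are assumptions.

```python
# Sketch of rule-based adversarial perturbation of DoS packet features.
# The feature names and rules are illustrative assumptions, not the paper's.

RULES = {
    # Lower the packet rate so flooding looks less bursty, keeping it positive.
    "pkt_rate": lambda v: max(v * 0.1, 1.0),
    # Pad tiny flood packets towards a typical benign payload size.
    "payload_len": lambda v: max(v, 64),
}

def perturb(packet: dict) -> dict:
    """Apply each rule to its feature, leaving all other features intact
    so the packet remains a valid (still malicious) DoS packet."""
    adv = dict(packet)
    for feature, rule in RULES.items():
        if feature in adv:
            adv[feature] = rule(adv[feature])
    return adv
```

Because the rules are deterministic and feature-local, the same generator can produce labelled adversarial samples for adversarial training as well as for evasion testing.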
Machine learning detection of cloud services abuse as C&C Infrastructure
The proliferation of cloud and public legitimate services (CLS) on a global scale has resulted in increasingly sophisticated malware attacks that abuse these services as command-and-control (C&C) communication channels. Conventional security solutions are inadequate for detecting malicious C&C traffic because it blends with legitimate traffic. This motivates the development of advanced detection techniques. We make the following contributions: First, we introduce a novel labeled dataset. This dataset serves as a valuable resource for training and evaluating detection techniques aimed at identifying malicious bots that abuse CLS as C&C channels. Second, we tailor our feature engineering to behaviors indicative of CLS abuse, such as connections to known CLS domains and potential C&C API calls. Third, to identify the most relevant features, we introduce a custom feature elimination (CFE) method designed to determine the exact number of features needed for filter selection approaches. Fourth, our approach focuses on both static and derivative features of Portable Executable (PE) files. After evaluating various machine learning (ML) classifiers, the random forest emerges as the most effective classifier, achieving a 98.26% detection rate. Fifth, we introduce the “Replace Misclassified Parameter (RMCP)” adversarial attack. This white-box strategy is designed to evaluate our system’s detection robustness. The RMCP attack modifies feature values in malicious samples to make them appear as benign samples, thereby bypassing the ML model’s classification while maintaining the malware’s malicious capabilities. The results of the robustness evaluation demonstrate that our proposed method successfully maintains a high accuracy level of 84%. In sum, our comprehensive approach offers a robust solution to the growing threat of malware abusing CLS as C&C infrastructure.
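The CFE method itself is not detailed in the abstract; one plausible reading of "determine the exact number of features needed for filter selection" is to add features in filter-ranked order and keep the smallest prefix achieving the best score. A numpy sketch with a simple mean-difference filter and a nearest-centroid stand-in classifier follows; both scoring choices are assumptions, and for brevity the sketch scores on the training data rather than a held-out split.

```python
import numpy as np

def filter_rank(X, y):
    """Rank features by absolute class-mean difference (a simple filter)."""
    diff = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    return np.argsort(diff)[::-1]

def accuracy(X, y):
    """Nearest-centroid accuracy using only the given columns."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return float((pred == (y == 1)).mean())

def cfe(X, y):
    """Return the smallest top-k feature prefix whose score matches the
    best score over all prefix sizes of the filter ranking."""
    order = filter_rank(X, y)
    scores = [accuracy(X[:, order[:k]], y) for k in range(1, X.shape[1] + 1)]
    k = scores.index(max(scores)) + 1   # earliest prefix achieving the best
    return order[:k]
```

On synthetic data where only one feature separates the classes, the procedure keeps that feature and discards the noise columns.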
Dynamic real-time risk analytics of uncontrollable states in complex internet of things systems, cyber risk at the edge
The Internet of Things (IoT) triggers new types of cyber risks. Therefore,
the integration of new IoT devices and services requires a self-assessment of
IoT cyber security posture. By security posture this article refers to the
cybersecurity strength of an organisation to predict, prevent and respond to
cyberthreats. At present, there is a gap in the state of the art, because there
are no self-assessment methods for quantifying IoT cyber risk posture. To
address this gap, an empirical analysis is performed of 12 cyber risk
assessment approaches. The results and the main findings from the analysis are
presented as the current and a target risk state for IoT systems, followed by
conclusions and recommendations on a transformation roadmap, describing how IoT
systems can achieve the target state with a new goal-oriented dependency model.
By target state, we refer to the cyber security target that matches the generic
security requirements of an organisation. The research paper studies and adapts
four alternatives for IoT risk assessment and identifies the goal-oriented
dependency modelling as a dominant approach among the risk assessment models
studied. The new goal-oriented dependency model in this article enables the
assessment of uncontrollable risk states in complex IoT systems and can be used
for a quantitative self-assessment of IoT cyber risk posture.
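The article's goal-oriented dependency model is not reproduced here; the sketch below only illustrates the general idea of such models, propagating risk through a dependency graph so that an uncontrollable component's risk surfaces in every component that depends on it. The graph, scores, and worst-case (max) propagation rule are illustrative assumptions.

```python
# Illustrative dependency-model sketch: each component has its own risk
# score in [0, 1]; its effective risk is the maximum of its own score and
# the effective risk of everything it depends on.

OWN_RISK = {"cloud_api": 0.2, "gateway": 0.3, "sensor": 0.8}
# 'sensor' plays the role of an uncontrollable state: its risk cannot be
# reduced directly, yet it dominates everything downstream of it.
DEPENDS_ON = {
    "cloud_api": ["gateway"],
    "gateway": ["sensor"],
    "sensor": [],
}

def effective_risk(component, cache=None):
    """Worst-case risk propagation along the dependency graph
    (assumes the graph is acyclic)."""
    cache = {} if cache is None else cache
    if component not in cache:
        deps = [effective_risk(d, cache) for d in DEPENDS_ON[component]]
        cache[component] = max([OWN_RISK[component], *deps])
    return cache[component]
```

Here the cloud API's own risk is low, but its effective risk inherits the uncontrollable sensor's score, which is the kind of visibility a quantitative self-assessment aims to provide.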